
    Actions travel with their objects: evidence for dynamic event files

    Moving a visual object is known to lead to an update of its cognitive representation. Given that object representations have also been shown to include codes describing the actions they were accompanied by, we investigated whether these action codes “move” along with their object. We replicated earlier findings that repeating stimulus and action features enhances performance if other features are repeated, but attenuates performance if they alternate. However, moving the objects in which the stimuli appeared between the two stimulus presentations had a strong impact on the feature bindings that involved location. Taken together, our findings provide evidence that changing the location of an object leaves two memory traces, one referring to its original location (an episodic record) and another referring to the new location (a working-memory trace).

    Motion and position shifts induced by the double-drift stimulus are unaffected by attentional load.

    The double-drift stimulus produces a strong shift in apparent motion direction that generates large errors of perceived position. In this study, we tested the effect of attentional load on the perceptual estimates of motion direction and position for double-drift stimuli. In each trial, four objects appeared, one in each quadrant of a large screen, and they moved upward or downward on an angled trajectory. The target object whose direction or position was to be judged was either cued with a small arrow prior to object motion (low attentional load condition) or cued after the objects stopped moving and disappeared (high attentional load condition). In Experiment 1, these objects appeared 10° from the central fixation, and participants reported the perceived direction of the target's trajectory after the stimulus disappeared by adjusting the direction of an arrow at the center of the response screen. In Experiment 2, the four double-drift objects could appear between 6° and 14° from the central fixation, and participants reported the location of the target object after its disappearance by moving the position of a small circle on the response screen. The errors in direction and position judgments showed little effect of the attentional manipulation: similar errors were seen in both experiments whether or not the participant knew which double-drift object would be tested. This suggests that orienting endogenous attention (i.e., by attending to only one object in the precued trials) does not interact with the strength of the motion or position shifts for the double-drift stimulus.

    Visual Learning in Multiple-Object Tracking

    Tracking moving objects in space is important for the maintenance of spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested using the Multiple Object Tracking (MOT) task, where participants track a subset of moving objects with attention over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves in a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning. Participants first conducted attentive tracking on trials with repeated motion trajectories for a short session. In a transfer phase we used the same motion trajectories but changed the roles of tracking targets and nontargets. We found that, compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were mixed. However, learning was not specific to the trained temporal order, as it transferred to trials where the motion was played backwards. These findings suggest that a demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although learning is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers learned the motion path of each trajectory independently of the exact temporal order.

    Speed has an effect on multiple-object tracking independently of the number of close encounters between targets and distractors

    Multiple-object tracking (MOT) studies have shown that tracking ability declines as object speed increases. However, this might be attributed solely to the increased number of times that target and distractor objects usually pass close to each other (“close encounters”) when speed is increased, resulting in more target–distractor confusions. The present study investigates whether speed itself affects MOT ability by using displays in which the number of close encounters is held constant across speeds. Observers viewed several pairs of disks, and each pair rotated about its own midpoint and also about the center of the display at varying speeds. Results showed that even with the number of close encounters held constant across speeds, increased speed impairs tracking performance, and the effect of speed is greater when the number of targets to be tracked is large. Moreover, neither the effect of the number of distractors nor the effect of target–distractor distance depended on speed, when speed was isolated from the typical concomitant increase in close encounters. These results imply that increased speed does not impair tracking solely by increasing close encounters. Rather, they support the view that speed affects MOT capacity by requiring more attentional resources to track at higher speeds.

    Range dependent processing of visual numerosity: similarities across vision and haptics

    ‘Subitizing’ refers to fast and accurate judgement of small numerosities, whereas for larger numerosities either counting or estimation is used. Counting is slow and precise, whereas estimation is fast but imprecise. In this study, consisting of five experiments, we investigated if and how the numerosity judgement process is affected by the relative spacing between the presented numerosities. To this end we let subjects judge the number of dots presented on a screen and recorded their response times. Our results show that subjects switch from counting to estimation if the relative differences between subsequent numerosities are large (a factor of 2), but that numerosity judgement in the subitizing range was still faster. We also show that this fast performance for small numerosities occurred only when numerosity information was present, indicating that it is typical of number processing and not of magnitude estimation in general. Furthermore, comparison with a previous haptic study suggests similar processing in numerosity judgement through haptics and vision.

    Facilitating Stable Representations: Serial Dependence in Vision

    We tested whether the intervening time between multiple glances influences the independence of the resulting visual percepts. Observers estimated how many dots were present in brief displays that repeated one, two, three, four, or a random number of trials later. Estimates made farther apart in time were more independent, and thus carried more information about the stimulus when combined. In addition, estimates from different visual field locations were more independent than estimates from the same location. Our results reveal a retinotopic serial dependence in visual numerosity estimates, which may be a mechanism for maintaining the continuity of visual perception in a noisy environment.

    Grip Force Reveals the Context Sensitivity of Language-Induced Motor Activity during “Action Words”

    Studies demonstrating the involvement of motor brain structures in language processing typically focus on time windows beyond the latencies of lexical-semantic access. Consequently, such studies remain inconclusive regarding whether motor brain structures are recruited directly in language processing or through post-linguistic conceptual imagery. In the present study, we introduce a grip-force sensor that allows online measurements of language-induced motor activity during sentence listening. We use this tool to investigate whether language-induced motor activity remains constant or is modulated in negative, as opposed to affirmative, linguistic contexts. Our findings demonstrate that this simple experimental paradigm can be used to study the online crosstalk between language and the motor systems in an ecological and economical manner. Our data further confirm that the motor brain structures that can be called upon during action word processing are not mandatorily involved; the crosstalk is asymmetrically governed by the linguistic context and not vice versa.

    Temporal estimation with two moving objects: overt and covert pursuit

    The current study examined temporal estimation in a prediction motion task where participants were cued to overtly pursue one of two moving objects, which could arrive either first (i.e., shortest time to contact [TTC]) or second (i.e., longest TTC) after a period of occlusion. Participants were instructed to estimate the TTC of the first-arriving object only, thus making it necessary to overtly pursue the cued object while at the same time covertly pursuing the other (non-cued) object. A control (baseline) condition was also included in which participants had to estimate the TTC of a single, overtly pursued object. Results showed that participants were able to estimate the arrival order of the two objects with very high accuracy irrespective of whether they had overtly or covertly pursued the first-arriving object. However, compared to the single-object baseline, participants’ temporal estimation of the covert object was impaired when it arrived 500 ms before the overtly pursued object. In terms of eye movements, participants exhibited significantly more switches in gaze location during occlusion from the cued to the non-cued object, but only when the latter arrived first. Still, comparison of trials with and without a switch in gaze location when the non-cued object arrived first indicated no advantage for temporal estimation. Taken together, our results indicate that overt pursuit is sufficient but not necessary for accurate temporal estimation. Covert pursuit can enable representation of a moving object’s trajectory, and thereby accurate temporal estimation, provided the object moves close to the overt attentional focus.